1. Creating a Docker Account and Obtaining an Access Token
In this lab, you will create a Docker account and obtain an access token. With a free Docker account, you can pull up to 200 Docker images every 6 hours. Without an account, you will most likely run into Docker's pull rate limit, which restricts anonymous users to 100 image pulls every 6 hours; this can cause problems, especially when multiple anonymous users connect from the same network.
1.1. Create a Docker account
IMPORTANT NOTE!!!
If you have been provided with a Docker account, skip to the next part.
In this part, you will create a Docker account.
-
Open a browser and navigate to the following URL:
-
Note: If you get prompted for password, use wasadmin
-
Click Sign Up and create a new account.
-
Sign in with your newly created account.
-
Click on your username in the top right corner and select Account Settings.
Alternatively, navigate to https://hub.docker.com/settings/general to access Account Settings.
-
Select Security.
-
Click New Access Token.
-
Add a token description, such as "Docker & Kubernetes labs".
-
Keep Access permissions as default (Read, Write, Delete).
-
Click Generate.
Notice it shows the following information on the page:
Make a note of both steps displayed on the page.
1.2. Log in with your Docker account
-
Connect to the virtual machine called "VM_WA3007_REL_4_0".
-
Open a new Terminal.
-
Run the following command to log into Docker:
docker login -u {your-docker-id} -p {your-access-token}
You should see a 'Login Succeeded' message.
-
Enter exit many times until the Terminal is closed.
2. Configuring Minikube/Kubernetes to Use a Custom Docker Account
In this lab, you will configure the Docker account/access token in Minikube. This ensures Minikube/Kubernetes uses your own Docker account to pull Docker images. With a free Docker account, you can pull up to 200 Docker images every 6 hours. Without an account, you risk hitting Docker's pull rate limit, which restricts anonymous users to 100 image pulls every 6 hours; this can cause problems, especially when multiple anonymous users connect from the same network.
2.1. Configure the Docker account in Minikube/Kubernetes
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
In this part, you will configure your Minikube/Kubernetes cluster to use the Docker account you configured earlier in the course.
-
Open a browser and navigate to www.docker.com to make sure you have an internet connection. Note: If you get prompted for a password, use wasadmin.
-
Open a Terminal window.
-
Ensure you have the following Docker configuration file:
cat ~/.docker/config.json
If you don't see any contents, or the file is not found, ensure you have executed "docker login" as mentioned in the note earlier in this lab.
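Assuming no credential helper is configured, the file has roughly this shape. The auth value is the base64 encoding of your-docker-id:your-access-token, so yours will differ (the placeholder below is illustrative, not a real value):

```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "<base64 of your-docker-id:your-access-token>"
    }
  }
}
```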
-
Set Docker as the Minikube driver:
minikube config set driver docker
-
For the driver change to take effect, run the delete command (Minikube might complain that no profile exists, which is fine; ignore the warning/error if it shows up):
minikube delete
-
Now start Minikube. This step can take a couple of minutes as Minikube downloads the right image from Docker Hub:
minikube start
-
Verify that Minikube is running as a Docker container. You should see one container listed with the name "minikube":
docker ps
-
Run the following four commands one after another to copy the Docker configuration to the right places within Minikube:
minikube ssh "mkdir -p .docker"
minikube cp ~/.docker/config.json minikube:/home/docker/.docker/
minikube cp ~/.docker/config.json minikube:/tmp/
minikube ssh "sudo cp /tmp/config.json /var/lib/kubelet/"
-
Restart the cluster for changes to take effect. Run these one after another:
minikube stop
minikube start
-
The above steps ensure that the Kubelet inside Minikube uses your Docker Hub credentials whenever downloading Docker images for the subsequent lab exercises.
-
Exit the Terminal and close the browser.
3. Getting Started with Kubernetes
The main terminal-based tool for working with Kubernetes is the kubectl command. With it you can deploy objects like deployments, pods and services and see what is going on inside the cluster. Another important tool is a GUI web interface called the Kubernetes Dashboard. It provides much of the same functionality as kubectl but does so in a way that for some operations is easier to navigate and operate. In this lab we will take a look at both of these tools and learn some of the things you can do with them.
-
kubectl Command
-
Kubernetes Dashboard
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
3.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps
3.2. Interact with the Cluster
In this part, you will interact with the Minikube Kubernetes cluster in various ways.
-
Ensure that the cluster is running:
minikube status
You should see:
If the cluster shows as not running, run the following (otherwise skip this step):
minikube start
Once the cluster starts, check the status again as mentioned in the previous step.
-
Enter the following to enable ingress functionality:
minikube addons enable ingress
You should see something similar to the following screenshot (the versions might be different):
-
Enter the following to get the cluster’s IP:
minikube ip
You should see the following (yours may be different):
-
Get Kubernetes cluster information:
kubectl cluster-info
It will display the IP address and port where the Kubernetes master is running. Yours will be different.
-
Create a working directory called Works inside your home directory:
mkdir ~/Works
-
Navigate to the working directory:
cd ~/Works
Check the command prompt. It should be:
wasadmin@<machine-id>:~/Works$
3.3. The kubectl command line interface (CLI)
The 'kubectl' command line interface is one of two main tools available for you to configure and manage a Kubernetes cluster. The other tool is the Kubernetes Dashboard.
In this lab we will review a few of the high level kubectl resource commands you might expect to use on a daily basis including:
-
explain
-
create
-
apply
-
get
-
delete
These commands work with most K8s resources like: namespaces, nodes, pods, deployments, services, etc.
Let's start by checking the cluster for any existing deployments. For this we will use the 'get' command. The syntax of this command is: kubectl get {resource-type}
-
Run the following get command:
kubectl get deployments
Your cluster shouldn't show any existing deployments.
If it does, then the output would look something like this:
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   1/1     1            1           20h
If you have any existing deployments, let's go ahead and delete them. You will use the kubectl delete {resource-type} {resource-name} command. Run the following delete command, making sure to substitute the name that was output from the earlier get command:
kubectl delete deployments nginx-web
Repeat the above command for each deployment you found in your get listing.
Another important command is apply. It is used to create and update resources and has the following syntax:
kubectl apply -f {resource-definition-filename}
-
To use the apply command, we need a resource definition file. Let's create one using the VSCode editor:
code myapp.yaml
-
Enter the following text into the file. You are welcome to use VSCode’s ability to help you with indentations to ensure the YAML is syntactically correct:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apache
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80
-
Save and close VSCode.
But what do all of those settings mean? We can get a basic understanding by using the 'kubectl explain' command, which has this syntax:
kubectl explain {resource-type}
-
Try running the following explain command:
kubectl explain deployments
The output of the command is:
-
It's also possible to drill down on this information. For example, what if we wanted to know more about the fields under "metadata:"? Try running the following command:
kubectl explain deployments.metadata
There are a lot of possible fields that could go under 'metadata', so the output of the above command is long. The first part of the output looks like this:
Now that we better understand the deployment YAML file, let's use it to create a deployment.
-
Execute the following apply command to create a deployment based on the yaml file we just created:
kubectl apply -f myapp.yaml
-
Check that the deployment was created:
kubectl get deployments
The deployment 'apache' should be in the list and the AVAILABLE field should say '1'. If READY shows 0/1, wait a minute and run the command again until 1/1 is shown.
Another way to deploy an application is to use the create command. It is less flexible than apply but it does come in handy when you need to create a quick deployment.
-
Deploy the 'nginx' Docker image using the following create command:
kubectl create deployment nginx --image nginx:latest
-
Check deployments:
kubectl get deployments
You may need to wait a minute until nginx is running. The output should look like this:
Next we will look at the Kubernetes Dashboard. Leave the deployments running.
3.4. Kubernetes Dashboard
Another main tool is the Kubernetes Dashboard. The dashboard is a GUI web application that makes it easy to check how the cluster is working.
-
Run the following command to start the dashboard:
minikube dashboard
After a few moments it should output the following to the console:
-
Typically, the browser will launch automatically. If not, look for the URL in the terminal output and browse to it in your browser manually.
-
There are some known Minikube issues that result in warnings even though the dashboard launches successfully. You might see these errors in the terminal. Ignore them.
-
The dashboard app looks like this:
-
The app has a navigation area to the left of the screen with links. Click on the deployments link under Workloads.
The main part of the screen should change to show your deployments:
-
Click the nginx deployment. This will show you the deployment details.
-
Scroll down to see more sections.
-
Click on the Edit resource button at the upper right of the screen.
This will pop up an edit screen with the deployments information in it. From this screen you can edit and apply updates to the deployment.
-
Click Cancel to dismiss the edit window.
-
Scale, Edit, and Delete actions are also available at the top right of the window.
Feel free to try out some of the other links. As you become more familiar with what’s available in Dashboard and as you work more with Kubernetes you may find some operations are easier to execute from the dashboard’s GUI than they are from a terminal using kubectl commands.
-
Close the browser.
-
In the Terminal, stop the command by pressing CTRL+C.
-
Type exit to close the Terminal.
3.5. Review
In this lab we looked at two tools for interacting with Kubernetes clusters: the kubectl CLI and the Kubernetes Dashboard.
4. Building a Docker Image with Dockerfile
Docker is an open-source containerization solution. Docker containers can run on any OS in an on-prem environment and also on any cloud platform. In this lab, you will create a custom Docker image by creating a Dockerfile.
Make sure you run the following:
docker login -u {your-docker-id} -p {your-access-token}
4.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps
4.2. Learning the Docker Command-line
Get quick information about Docker by running it without any arguments.
-
Run the following command:
docker | less
-
Enter q to exit.
The commands list is shown below for your reference.
attach     Attach local standard input, output, and error streams to a running container
build      Build an image from a Dockerfile
commit     Create a new image from a container's changes
cp         Copy files/folders between a container and the local filesystem
create     Create a new container
diff       Inspect changes to files or directories on a container's filesystem
events     Get real time events from the server
exec       Run a command in a running container
export     Export a container's filesystem as a tar archive
history    Show the history of an image
images     List images
import     Import the contents from a tarball to create a filesystem image
info       Display system-wide information
inspect    Return low-level information on Docker objects
kill       Kill one or more running containers
load       Load an image from a tar archive or STDIN
login      Log in to a Docker registry
logout     Log out from a Docker registry
logs       Fetch the logs of a container
pause      Pause all processes within one or more containers
port       List port mappings or a specific mapping for the container
ps         List containers
pull       Pull an image or a repository from a registry
push       Push an image or a repository to a registry
rename     Rename a container
restart    Restart one or more containers
rm         Remove one or more containers
rmi        Remove one or more images
run        Run a command in a new container
save       Save one or more images to a tar archive (streamed to STDOUT by default)
search     Search the Docker Hub for images
start      Start one or more stopped containers
stats      Display a live stream of container(s) resource usage statistics
stop       Stop one or more running containers
tag        Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top        Display the running processes of a container
unpause    Unpause all processes within one or more containers
update     Update configuration of one or more containers
version    Show the Docker version information
wait       Block until one or more containers stop, then print their exit codes
-
You can get command-specific help by using the --help flag added to the command's invocation line, e.g., to list containers created by Docker (the command is called ps), use the following command:
docker ps --help
More information on Docker's command-line tools can be obtained at https://docs.docker.com/reference/commandline/cli/
-
Enter the following command:
docker images
This command displays the Docker images available on the machine.
4.3. Create Dockerfile for Building a Custom Image
In this part, you will create a Dockerfile which will build a custom Docker image.
-
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/
-
Download the sample Java application by using the following wget command:
wget https://course-sw.s3.amazonaws.com/sample-webapp.zip
-
Extract the contents of the sample Java application:
unzip sample-webapp.zip
-
Switch to the sample-webapp directory:
cd sample-webapp
-
Check the content:
ls
You should see:
-
Type the following command to create Dockerfile:
code Dockerfile
-
Enter following code:
FROM maven:3.6-jdk-8 AS builder
COPY . /app
WORKDIR /app
RUN mvn install

FROM openjdk:8-jdk-alpine
ARG TARGET=app/target
COPY --from=builder ${TARGET}/ /app/target
RUN ls /app/target
EXPOSE 8080
ENTRYPOINT ["java","-jar","./app/target/sample-webapp-1.0.jar"]
The Dockerfile script is a 2-stage script. The first stage will obtain a base image from Docker Hub that will already have Maven and JDK 8 preinstalled. After downloading the base image, your custom Java application source will be copied to it and Maven will build your application to generate the deployable artifact (.jar). The second stage will use a very small base image that already has all the dependencies required to run your custom Java application.
-
Click Save button.
-
Close VSCode.
-
Type
ls
and verify the new Dockerfile is there.
-
Run following command to build a custom image:
docker build -t sample-webapp:v1.0 .
Wait until the process is completed; at the end of the messages you should see:
4.4. Verify the Custom Image
In this part you will create a container based on the custom image you created in the previous part. You will also connect to the container, verify the jar file for the application exists, and execute the jar file.
-
In the terminal, run the following command to verify the custom image exists:
docker images | grep sample
sample-webapp v1.0 is shown:
-
Create a container based on the above image and connect to it:
docker run -d --name sample-webapp-container -p 8080:8080 sample-webapp:v1.0
Note: docker run is the short form of docker container run.
-d runs the container in background as a daemon.
--name assigns a name to the container. You can create multiple containers based on the same image. Name can be utilized with commands to stop, start, and remove containers.
-p lets you perform port mapping. The number before the colon (:) is the host port on which the container's port (the number after the colon) will be available.
The command will show a message similar to this:
Troubleshoot:
In case you made a mistake in the command, and running it again shows that the container already exists, stop and remove the container using the following commands:
docker stop sample-webapp-container
docker rm sample-webapp-container
and then run the command to create the container again.
-
Open a web browser window and navigate to the following URL:
http://localhost:8080
-
Verify the following page shows up:
4.5. Interacting with the Container
In this part, you will check container logs and execute commands in the container.
-
Run the following command to check container logs:
docker logs sample-webapp-container
You can inspect logs to troubleshoot your application running in a container. If the application in a container crashes, the container will crash as well.
-
Run the following command to inspect the Java version in the container:
docker exec -it sample-webapp-container java -version
-
Run the following command to inspect the directory structure and verify your sample web application is deployed:
docker exec -it sample-webapp-container ls app/target
Notice sample-webapp-1.0.jar is available in the container. The ls app/target command was executed in the container. You could use cat or any other command to view the contents of any file available in the container.
4.6. Stop and Delete the Container
In this part, you will stop and delete the docker container you created in the previous parts of the lab.
-
Stop the container:
docker stop sample-webapp-container
You can also obtain the container ID by using "docker ps -a" and use the ID instead of the container name.
-
Destroy the container:
docker rm sample-webapp-container
You can also delete an unused Docker image by running "docker rmi <image_name>:<tag>".
-
Type exit until the Terminal is closed.
-
Close all.
4.7. Review
In this lab, we reviewed the main Docker command-line operations, created a Dockerfile script, and ran it to build a Docker image to containerize a custom Java application.
5. Deploying to Kubernetes
Kubernetes is an open-source container orchestration solution. It is used for automating deployment, scaling, and management of containerized applications. In this lab, you will explore the basics of Kubernetes. You will use the Minikube distribution of Kubernetes which allows you to create a Kubernetes environment with ease. Minikube is a single-node Kubernetes cluster. In more advanced cases, you will want to use OpenShift to set up a multi-node cluster.
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
5.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps
-
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/
-
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running, then start it using this command:
minikube start
5.2. Deploy a Custom Image and Expose the Service
In this part, you will manually deploy a custom docker image and verify your service is working in Kubernetes.
-
Switch to the sample-webapp directory:
cd ~/Works/sample-webapp
-
Minikube has a separate Docker daemon from the host lab machine. When we built the Docker image in the previous lab, it was cached into the environment of the host machine. In this step, we will switch the environment. Run the following in the terminal:
eval $(minikube -p minikube docker-env)
-
Run docker images and notice how the list of images now is very different. You should see all the Kubernetes component images listed. If you search, you won’t see the sample-webapp, since it was not cached in this environment.
-
Next rebuild the Docker image so it caches in the Minikube Docker environment:
docker build -t sample-webapp:v1.0 .
-
Create a new application deployment using this freshly created Docker image:
kubectl create deployment sample-webapp --image=sample-webapp:v1.0
-
Get the deployment list:
kubectl get deployments
It might take a while before the READY status shows 1/1 for sample-webapp. You may need to wait for a few minutes and run the command again until it shows 1/1.
-
Get the pod list:
kubectl get pods
It might take a while before the pod READY status shows 1/1. You may need to wait for a few minutes and run the command again until it shows 1/1.
-
In the terminal, run the following command to see exposed services:
kubectl get services
You won't see your custom sample-webapp service since it's not exposed yet.
-
In the terminal, run the following command to expose the service:
kubectl expose deployment sample-webapp --port 8080
You can define a custom port mapping by using --target-port=<internal_port> --port <external_port>.
-
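For reference, the --port/--target-port flags correspond to the port and targetPort fields of a Service object. A sketch (not one of this lab's files) roughly equivalent to the expose command above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-webapp
spec:
  selector:
    app: sample-webapp    # matches the pods created by the deployment
  ports:
  - port: 8080            # port the service listens on inside the cluster
    targetPort: 8080      # port the container itself listens on
```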
Get the services list:
kubectl get services
-
Notice how now you have a CLUSTER-IP assigned to the sample-webapp service:
-
Take note of the ClusterIP. In this case, it is 10.107.205.172. Yours will be different.
-
Since this is an internal address, you won’t be able to access it from your host machine directly (we will look into it in the Services lab later on in the course). For now, you can ssh into the cluster and then use curl to see if the webpage is up and running.
-
Use the following to ssh into the Minikube cluster:
minikube ssh
-
Now use curl to see if the website is up and running. Use the ClusterIP you noted earlier. Also, use port 8080 for this.
curl http://10.107.205.172:8080
-
You should see the html of the home page in the terminal, like this:
-
Exit the ssh session by typing exit at the prompt. This will take you back to your main terminal session.
-
Finally, let's revert our host machine's Docker daemon to the original host machine Docker environment:
eval $(minikube docker-env -u)
-
If you run docker images now, you should notice the original list of images.
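To see why eval is needed here: minikube docker-env simply prints shell export lines, and eval applies them to the current shell so the docker CLI talks to Minikube's daemon instead of the host's. A sketch with a stub function standing in for minikube (the real values come from your cluster, so the ones below are made up):

```shell
# Stub that mimics the kind of output `minikube docker-env` prints.
minikube_docker_env_stub() {
  echo 'export DOCKER_HOST="tcp://192.168.49.2:2376"'
  echo 'export DOCKER_TLS_VERIFY="1"'
}

# eval runs those export lines in the current shell.
eval "$(minikube_docker_env_stub)"
echo "$DOCKER_HOST"   # prints tcp://192.168.49.2:2376 (the stub's value)
```

Running the real command without eval is a handy way to inspect what would change; the -u form used above prints the corresponding unset commands to revert.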
5.3. Exploring the Dashboard
In this part, you will start the Minikube dashboard and explore some of the steps that you previously performed from the terminal.
-
In the terminal, run the following command to launch the Minikube dashboard:
minikube dashboard
The browser should launch automatically with the dashboard. If that doesn't happen, look for the URL in the terminal output and use it to launch the dashboard manually by CTRL+clicking on it:
-
On the left side of the dashboard, click Deployments.
Verify you have the sample-webapp deployment that you deployed from the terminal window. You may see more deployments.
-
On the left side of the dashboard, click Pods.
Verify you have the sample-webapp pod. Your pod id will be different.
-
On the left side of the dashboard, click Services.
Verify you have the sample-webapp service.
-
Exit the browser and go back to the terminal. Enter Ctrl+C to end the Dashboard application.
5.4. View Logs and Run Commands in a Container
In this part, you will view logs and run commands in a container.
-
In the terminal window, run the following command to get the pod list:
kubectl get pods
-
Make a note of the sample-webapp pod name:
-
In the terminal, run the following command to view logs of the container running in a pod:
kubectl logs <pod_name>
Notice the command displays the logs from the application running in the container of the pod specified in the command.
-
Run the following command to get Java version in the container:
kubectl exec -it <pod_name> -- java -version
Ensure there's a space between -- and java.
-
Run the following command to view directory structure and verify the application is deployed:
kubectl exec -it <pod_name> -- ls app/target
Ensure there's a space between -- and ls.
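The -- is a general command-line convention meaning "end of options": everything after it is passed through rather than parsed as a flag, which is how kubectl separates its own flags from the command to run in the container. The same convention can be tried with an ordinary tool like grep (the file path here is just an example):

```shell
# Write a file whose only line is the literal text "-v".
printf '%s\n' '-v' > /tmp/dashfile.txt

# Without --, grep would treat -v as its own "invert match" flag;
# with --, it is taken as the literal search pattern.
grep -- -v /tmp/dashfile.txt   # prints: -v
```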
5.5. Delete the Service and Deployment
In this part, you will delete the sample-webapp service and deployment.
-
In the Kubernetes terminal, run the following command to delete the sample-webapp service:
kubectl delete service sample-webapp
-
Run the following command to delete the deployment:
kubectl delete deployment sample-webapp
-
Verify the service and deployment are deleted:
kubectl get deployments
kubectl get pods
kubectl get services
NOTE: You might see a service named kubernetes; this is a system service and is fine.
5.6. Redeploy the Custom Image and Expose the Service using a YAML Configuration File
In this part, you will redeploy the custom image and expose the service using a YAML configuration file. YAML configuration is the preferred way to deploy images and configure services. You can check in the YAML files to a source control repository, such as Git, to version the changes.
-
View the YAML configuration file:
cat deployment.yaml | more
The # character marks comments. The first section deploys the sample-webapp:v1.0 image to the Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-webapp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-webapp
  template:
    metadata:
      labels:
        app: sample-webapp
    spec:
      containers:
      - image: sample-webapp:v1.0
        name: sample-webapp
---
# The second section exposes the deployed application as a service.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sample-webapp
  name: sample-webapp
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    app: sample-webapp
-
Run the following command to perform the deployment and expose the service:
kubectl apply -f deployment.yaml
Notice it displays the following message:
-
Get the services list:
kubectl get services
-
You should see the service is up and running.
-
Optional Extra Credit Step: Follow the steps detailed in Part 2 to ssh into the Minikube cluster and verify that the website is running using the curl command.
-
Delete the service and deployment by running the following command:
kubectl delete -f deployment.yaml
Notice both the service and deployment are deleted.
5.7. Clean-Up
-
Close all open browser windows.
-
Switch to the Dashboard Terminal and press Ctrl+C to stop any processes running.
-
Type exit many times in all Terminals to close them.
5.8. Review
In this lab, you learned the basics of Kubernetes with minikube and kubectl.
6. Implementing the Sidecar Pattern
In this lab we will use Kubernetes to implement a pod that uses the sidecar pattern. The pod will have two containers:
-
Application Container - The application will be a simple script that appends some data to a log file every few seconds.
-
Sidecar Container - The application in the sidecar will be an nginx server that provides HTTP access to the file that the application container's app is logging to.
Instead of providing HTTP access, a real-world sidecar might copy the log file to a log aggregation server.
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
6.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps
-
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/
-
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running, then start it using this command:
minikube start
6.2. Create an Application Pod
First we will create an application pod. The application in this pod will save data to a log file every five seconds. In this section we will create the application pod in kubernetes and shell into it to observe the work it does.
-
Create a file using VSCode:
code application-pod.yaml
-
Add the following content:
# application-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: application-pod
spec:
  containers:
  - name: app-container
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/app.log; sleep 5;done"]
-
Save the file and close the editor.
-
Create the application pod in kubernetes:
kubectl apply -f application-pod.yaml -
Check that the pod was created:
kubectl get pods
You should get the following output:
-
Describe the pod:
kubectl describe pods application-pod
The command produces a large amount of output. Scroll through the output until you see the "Containers" section. It should look like this:
The pod has a single container named 'app-container' hosting an ubuntu Docker image and running a single line script (see Args) that appends the date to an app.log file. Next, we will check this log file to verify that the application is working.
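The container's one-liner can be tried directly on the lab machine in a bounded form (three iterations, no sleep, and a /tmp path, so it terminates quickly and needs no root):

```shell
# Same idea as the pod's script: append one timestamp line per iteration.
rm -f /tmp/app.log
for i in 1 2 3; do date >> /tmp/app.log; done
wc -l < /tmp/app.log   # three lines, one per iteration
```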
-
Shell into the app-container using the following command:
kubectl exec application-pod -c app-container -it -- /bin/bash
The prompt will change to:
root@application-pod:/#
-
Check that the app.log file exists:
ls -l /var/log/app.log
Your output should look similar to this:
-rw-r--r-- 1 root root 493 Dec 23 20:20 /var/log/app.log
-
Try running the list command a few more times. You should notice that the size of the app.log file is larger each time:
ls -l /var/log/app.log -
Use the linux tail command to take a look at the app.log file’s contents:
tail -f /var/log/app.log
The output should look similar to this:
The tail command displays the most recent lines from the end of the app.log file. If you wait a few moments you will see new dates as they are appended to the file by the application’s script. This verifies that the application is working.
We would like a way to check this log without having to shell into the application container. To do that we will develop a sidecar container.
-
Hit Ctrl-C to stop the tail command.
-
Exit the container shell using the following command:
exit
6.3. Add a Second Container
In this section we will add a second container to the pod. The second container will hold an nginx web server.
-
First, delete the current deployment. The deletion might take a few moments. Wait for it to complete before moving on:
kubectl delete -f application-pod.yaml
-
Open the application-pod.yaml again in VSCode:
code application-pod.yaml
-
Add the content shown below:
...
  - name: sidecar-container
    image: nginx:latest
    ports:
    - containerPort: 80
-
Save the file and close the editor. Be careful with the indentation.
-
Create the application pod in kubernetes using the updated yaml file:
kubectl apply -f application-pod.yaml
-
Check out the pods:
kubectl get pods
You should get the following output:
READY should now say 2/2 meaning 2 out of 2 containers are running. Wait and repeat the command until you see 2/2.
-
Describe the pod:
kubectl describe pods application-pod
-
Scroll through the output until you see the "containers" section.
You can see the second container here named "sidecar-container". It is running an nginx image. Let’s verify that the nginx server is working. We will do that by shelling into the second container.
-
Shell into the sidecar-container using the following command:
kubectl exec application-pod -c sidecar-container -it -- /bin/bash
The prompt will change to:
root@application-pod:/#
-
Use the Linux curl command to make a call to the nginx server:
curl http://localhost/index.html
-
The first few lines of output should be:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
This is the nginx welcome page. If the server were not running, we would have received a 'Connection refused' message instead. This verifies that the nginx server is running in the second container.
At this point both containers are running but the nginx server does not have access to the app.log file which is on the first container’s file system. Next, we will use a shared volume to solve this issue.
-
Exit the container shell using the following command:
exit
6.4. Create and Use a Shared Volume
For our design to work we need the application to place its log file somewhere that the nginx server can access it. We can use a shared volume for this. This will involve three steps:
-
Add a shared volume
-
Setup the app-container to use the shared volume
-
Setup the sidecar-container to use the shared volume
All of this can be done in the yaml file.
-
First, delete the current deployment. The deletion might take a few moments. Wait for it to complete before moving on:
kubectl delete -f application-pod.yaml -
Edit the application-pod.yaml in VSCode:
code application-pod.yaml -
Add the following in bold after the 'spec:' line and before the 'containers:' line:
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
This establishes a shared volume called 'shared-logs'.
-
Add the following code in bold to the app-container after the 'args' line and before the '- name: sidecar-container' line:
args: ["-c", ...]
volumeMounts:
- name: shared-logs
  mountPath: /var/log
- name: sidecar-container
This mounts the shared-logs volume to the /var/log directory in the app-container. Remember the application in that container is appending to /var/log/app.log. Now any file in that directory, including app.log, will be saved to the root of the shared volume.
-
Add the following code in bold at the end of the sidecar-container after the '- containerPort' line:
ports:
- containerPort: 80
volumeMounts:
- name: shared-logs
  mountPath: /usr/share/nginx/html
These lines tell Kubernetes to mount the shared-logs volume to the /usr/share/nginx/html directory in the sidecar-container. The mount point is the same directory nginx serves files from. That means any file in the root of the shared-logs volume is available to be served by nginx, including app.log.
-
Save the file.
-
Verify that you have the right indentation in the file as shown below or it won’t run:
-
Close the editor.
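For reference, the assembled manifest should now have roughly the following shape. This is a sketch only: the app-container image and args shown here are placeholders, so keep the values already in your file from the earlier parts of the lab.

```yaml
# Sketch of application-pod.yaml after adding the shared volume.
# <your-app-image> and the args value are placeholders -- keep your existing values.
apiVersion: v1
kind: Pod
metadata:
  name: application-pod
spec:
  volumes:
  - name: shared-logs          # the shared emptyDir volume
    emptyDir: {}
  containers:
  - name: app-container
    image: <your-app-image>
    args: ["-c", "<your existing logging command>"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log      # where the app writes app.log
  - name: sidecar-container
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /usr/share/nginx/html   # the directory nginx serves from
```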
6.5. Deploy and Test the App and Sidecar
In this part we will update the application again. At this point it includes the app-container and sidecar-container with both accessing the app.log file from a shared volume. In testing we should see the contents of this file changing as we access it through the nginx server.
-
Create the application pod in kubernetes using the updated yaml file:
kubectl apply -f application-pod.yaml -
Check out the pods:
kubectl get pods
You should get the following output:
READY should say 2/2 meaning both containers are running.
-
Shell into the sidecar-container using the following command:
kubectl exec application-pod -c sidecar-container -it -- /bin/bash
The prompt will change to:
root@application-pod:/# -
Use the linux curl command to make a call to the nginx server.
curl http://localhost/app.log -
The output should be similar to this:
If you run the command a few times you will see that the timestamp on the last line changes, which shows that new entries are being appended.
We have successfully implemented a sidecar container that accesses a log file used by the application in the app-container.
-
Exit the sidecar-container shell using the following command:
exit -
Delete the deployment from the terminal (make sure you exited the sidecar-container shell first):
kubectl delete -f application-pod.yaml -
Close Terminal by typing exit.
6.6. Review
In this lab we implemented a pod that uses the sidecar pattern in Kubernetes. Our main application created logs while the sidecar container provided HTTP access to the log file. As implemented, we still had to shell into the sidecar to access the log file. Adding a Kubernetes service would complete the design by allowing us to access the sidecar from outside the Kubernetes cluster.
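As a sketch of that missing piece, a NodePort Service along the following lines could expose the sidecar outside the cluster. The service name, the label, and the selector are illustrative, not part of this lab, and the pod would need a matching label added to its metadata:

```yaml
# Illustrative only -- not a lab step.
apiVersion: v1
kind: Service
metadata:
  name: app-log-service        # hypothetical name
spec:
  type: NodePort               # exposes the port on the cluster node
  selector:
    app: application-pod       # assumes the pod is given this label
  ports:
  - port: 80                   # the sidecar's nginx port
    targetPort: 80
```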
7. Deploying Applications
In this lab, you will see how to deploy Kubernetes applications using a Deployment. You will also perform various operations on the deployment such as upgrade, pause, resume, and scale.
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
7.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps -
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/ -
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running then start it using this command:
minikube start
7.2. Create a Deployment
-
Use VSCode to create a deployment manifest file. This command will create the file and bring it up in edit mode:
code nginx-deployment.yaml -
Add the following contents to the file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
-
Save the file.
-
Close the editor.
-
Create the Deployment by running the following command:
kubectl apply -f nginx-deployment.yaml
You should see:
deployment.apps/nginx-deployment created -
Run the following command to verify the Deployment was created:
kubectl get deployments
You should see:
If you don’t get the above result, it might take a few minutes for the nginx image to get downloaded. Wait, and don’t move to the next step until you see 3/3. -
To see the Deployment rollout status, run the following command:
kubectl rollout status deployment.v1.apps/nginx-deployment
The "deployment.v1.apps" prefix on the deployment name specifies the API version (v1.apps). You could also simply use "deployments/nginx-deployment". API versions allow Kubernetes to introduce changes without breaking existing clients. Different versions can have different functionality. One API server can support multiple API versions simultaneously. You should see:
-
To view the details of the deployment, run the following command:
kubectl get deployment nginx-deployment -o yaml -
Labels are automatically generated for each Pod. To see what they are, run:
kubectl get pods --show-labels
7.3. Update a Deployment
-
Run the following command to check the nginx version image used by the current Deployment:
kubectl describe deployment nginx-deployment
Notice the image is displayed as 1.7.9
-
Open the Deployment manifest for editing:
code nginx-deployment.yaml -
Change image version from 1.7.9 to 1.9.1
-
Change replicas count from 3 to 4
-
Save and close the editor.
-
Apply the updates:
kubectl apply -f nginx-deployment.yaml
You will see:
deployment.apps/nginx-deployment configured -
Check rolling update status:
kubectl rollout status deployment.v1.apps/nginx-deployment
You will see:
deployment "nginx-deployment" successfully rolled out
It might take a few minutes for all replicas to get updated to the new nginx version. -
View the Deployment:
kubectl get deployments
Ensure there are 4 replicas.
-
Verify nginx version is updated to 1.9.1:
kubectl describe deployment nginx-deployment
Ensure the Image version is 1.9.1
7.4. Roll Back a Deployment
In this part, you will revert an update. One use case is when you try to upgrade your Deployment to a version that doesn’t exist. You can roll back such a deployment to make your application functional again by reverting to the previous version.
-
Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.91 instead of nginx:1.9.1. Enter the following into the terminal:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91
You should see:
deployment.apps/nginx-deployment image updated -
The rollout gets stuck. You can verify it by checking the rollout status:
kubectl rollout status deployment.v1.apps/nginx-deployment -
Press Ctrl+C to stop the above rollout status watch.
-
Run the following command to get pod list:
kubectl get pods
Looking at the Pods created, you see that the Pod(s) created by the new ReplicaSet are stuck in an image pull loop.
-
Undo the recent change:
kubectl rollout undo deployment.v1.apps/nginx-deployment
You should see:
deployment.apps/nginx-deployment rolled back -
Verify the invalid pods are removed:
kubectl get pods
Notice that the problematic pods are either in "Terminating" state or have completely disappeared after successfully terminating.
-
Scale the deployment by using an imperative command:
kubectl scale deployment.v1.apps/nginx-deployment --replicas=1
You should see:
deployment.apps/nginx-deployment scaled -
Verify scaling is configured:
kubectl get deployments nginx-deployment
You should see:
7.5. Pausing and Resuming a Deployment
You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
-
Pause the Deployment by running the following command:
kubectl rollout pause deployment.v1.apps/nginx-deployment -
While the rollout is paused, set the Deployment image to a different version:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.2 -
Run the following command to verify the image version:
kubectl rollout status deployment.v1.apps/nginx-deployment
This command will keep showing you "Waiting for deployment" since the rollout is paused. -
Press Ctrl+C to exit out to the terminal.
-
Resume rollout:
kubectl rollout resume deployment.v1.apps/nginx-deployment -
Verify the rollout status:
kubectl rollout status deployment.v1.apps/nginx-deployment
You may have to wait a few seconds for the command to complete.
-
Verify the Deployment image version:
kubectl describe deployment nginx-deployment
The Image version should show as 1.9.2
7.6. Clean-Up
-
Delete your Deployment:
kubectl delete deployments/nginx-deployment -
Verify the deployment is deleted:
kubectl get deployments -
Verify the nginx-deployment pods are deleted:
kubectl get pods
It might take a few moments for the pods to get deleted. You may still see other, unrelated pods. -
Type exit to exit the Terminal.
7.7. Review
In this lab, you deployed Kubernetes applications and performed various operations such as upgrade, pause, resume, and scale on the Deployment.
8. Implementing RBAC Security
In this lab, you will implement security using RBAC. You will see how to create users with basic security and also with certificates. You will explore how to create contexts so you can switch between users. You will also implement authorization by using Role and RoleBinding. A role is essentially the permissions that you want to grant and a role binding is assigning the permissions to a specific user or group.
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
8.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps -
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/ -
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running then start it using this command:
minikube start
8.2. Downloading and Prepping the Security Project
In this part, you will download the security project from the web and unzip it.
-
Make sure you are in the Works folder:
cd ~/Works -
Download the security project by using the following wget command:
wget https://course-sw.s3.amazonaws.com/security.zip
-
Alternatively, open a browser and enter the following link to download the application:
https://course-sw.s3.amazonaws.com/security.zip -
Extract the contents of security.zip:
unzip security.zip -
Change to the security subdirectory:
cd security -
Verify the files were extracted:
ls
You should see the files that you just extracted:
8.3. View the Cluster Configuration and Access the API Server
In this part, you will view the Kubernetes configuration and access the API server using cURL. In a real-world scenario, you won’t use cURL. Instead, you will use kubectl and various other commands to communicate with the API server.
-
Run the following command to view Kubernetes configuration:
kubectl config view
The above command displays the contents of the ~/.kube/config file.
Notice by default, there’s just one user/principal, minikube in this case, and there’s one context. A context represents a combination of Kubernetes cluster and user/principal that can connect to a cluster.
The file ca.crt represents the CA used by the cluster, and the files client.crt and client.key map to the user minikube. kubectl uses the certificates and keys from the current context to encode the request.
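To illustrate how clusters, users, and contexts relate, a minimal kubeconfig has the shape shown below. The paths and server address here are illustrative; your ~/.kube/config contains the actual minikube values:

```yaml
# Sketch of a kubeconfig file (values are illustrative)
apiVersion: v1
kind: Config
clusters:                         # API servers that can be targeted
- name: minikube
  cluster:
    certificate-authority: /home/wasadmin/.minikube/ca.crt
    server: https://<minikube-ip>:8443
users:                            # credentials for users/principals
- name: minikube
  user:
    client-certificate: <path-to>/client.crt
    client-key: <path-to>/client.key
contexts:                         # cluster + user pairings
- name: minikube
  context:
    cluster: minikube
    user: minikube
current-context: minikube         # the context kubectl uses by default
```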
-
To add a new user/principal using a plain-text password (basic security), run the following command:
kubectl config set-credentials alice --password=abc123 --username=alice -
View the Kubernetes configuration:
kubectl config view
Notice there are now 2 users. Alice is added as a user with a redacted plain-text password whereas the default user is added with a certificate.
8.4. Create a Certificate Signing Request and Verify the CSR
In the next few parts, you will create another user/principal. This time you will use a key and certificate. In this part, you will create CSR and verify it. The CSR will be assigned a user/principal, Joe, later in the lab.
-
Run the following command to generate a private key for Joe:
openssl genrsa -out joe.key 2048 -
Run the following command to create a Certificate Signing Request from the private key:
openssl req -new -key joe.key -out joe.csr -subj "/CN=joe/O=sales" -
View Kubernetes configuration and locate the directory where certificates are stored:
kubectl config view
Make a note of the folder as highlighted in the screenshot.
-
You will use the folder you noted in the previous part quite often throughout this lab. Let’s create a variable and store the location, enter the following command:
LOCATION=/home/wasadmin/.minikube -
Generate the final certificate, joe.crt, by approving the certificate signing request, joe.csr, you made earlier:
openssl x509 -req -in joe.csr -CA $LOCATION/ca.crt -CAkey $LOCATION/ca.key -CAcreateserial -out joe.crt -days 500[Enter the command in 1 line]
You should see:
Certificate request self-signature ok
subject=CN = joe, O = sales -
Verify the joe.crt file contains the certificate issued by Kubernetes:
cat joe.crt
You should see:
-----BEGIN CERTIFICATE----- <YOUR CERTIFICATE CONTENT> -----END CERTIFICATE-----
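If you want to double-check that a certificate issued this way really chains back to its CA, openssl can verify it. The sketch below repeats the whole flow against a throwaway CA in a temporary directory, so it does not touch the minikube files used by the lab:

```shell
# Reproduce the CSR flow with a disposable CA (illustrative; the lab signs with minikube's CA)
set -e
dir=$(mktemp -d)
# create a throwaway CA standing in for minikube's ca.crt/ca.key
openssl genrsa -out "$dir/ca.key" 2048
openssl req -x509 -new -key "$dir/ca.key" -days 1 -subj "/CN=test-ca" -out "$dir/ca.crt"
# same steps as the lab: private key, CSR, then sign the CSR with the CA
openssl genrsa -out "$dir/joe.key" 2048
openssl req -new -key "$dir/joe.key" -out "$dir/joe.csr" -subj "/CN=joe/O=sales"
openssl x509 -req -in "$dir/joe.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
  -CAcreateserial -out "$dir/joe.crt" -days 1
# confirm the issued certificate verifies against the CA
openssl verify -CAfile "$dir/ca.crt" "$dir/joe.crt"
```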
8.5. Create a User/Principal with the Key and Certificate
In this part, you will create a Kubernetes user/principal with the key and certificate you generated in the previous parts of the lab.
-
Add Joe as a user to Kubernetes:
kubectl config set-credentials joe --client-certificate=joe.crt --client-key=joe.key
[Enter the command in 1 line]
Verify it shows the following message:
User "joe" set. -
View Kubernetes configuration and verify Joe is listed:
kubectl config view
Notice it shows the following:
- name: joe
  user:
    client-certificate: <YOUR_WORKING_DIRECTORY>/security/joe.crt
    client-key: <YOUR_WORKING_DIRECTORY>/security/joe.key
8.6. Create Contexts to Easily Switch Between Users
In this part, you will create contexts so you can easily switch between the users. You have two users: alice and joe. Unlike OpenShift, which lets you use the oc login command to log in as a different user, Kubernetes relies on contexts and the kubectl command.
Before creating the contexts, let’s create a namespace. Namespaces are like projects that let you group related resources together. For example, order-management-system namespace could hold order, shipping, and invoicing deployments (pods + services). You can grant authorization at a namespace level and assign project-specific users.
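As an aside, the namespace created imperatively in the next step could also be declared in YAML and applied with kubectl apply. This is just a sketch of the equivalent manifest, not a lab step:

```yaml
# Declarative equivalent of `kubectl create namespace sales`
apiVersion: v1
kind: Namespace
metadata:
  name: sales
```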
-
Create a Sales namespace:
kubectl create namespace sales -
Verify the namespace has been created:
kubectl get namespaces
You should see:
Let’s perform a deployment so that we can later verify if we can see it as a certain user, such as joe and alice.
-
Let’s ensure that any previous deployment called nginx is removed. Run the following. You will either get a confirmation of the delete or a "not found" error if it didn’t exist. Either way is fine.
kubectl delete deployment nginx -
Deploy a sample image to the sales namespace:
kubectl create deployment nginx --image=nginx --namespace=sales -
Run the following command to create a context for Joe that should limit Joe’s permissions to a custom namespace named sales:
kubectl config set-context joe-context --namespace=sales --user="joe" --cluster="minikube"
[Enter the command in 1 line]
Notice it shows the message: Context "joe-context" created.
-
Create a context for Alice:
kubectl config set-context alice-context --namespace=sales --user="alice" --cluster="minikube"[Enter the command in 1 line]
Notice it shows the message: Context "alice-context" created.
-
What exactly happened when you executed the above two commands? Let’s find out. Run the following command to view the Kubernetes configuration:
kubectl config viewYou have 3 contexts. The default is the last one in the contexts section. The first context will let you connect to the minikube cluster as alice and the second one lets you connect to the minikube cluster as joe.
-
Let’s test the contexts. Run the following command to check the current context:
kubectl config current-context
Notice minikube is displayed which means you are using the cluster admin’s context and have access to all Kubernetes cluster resources.
-
Verify you can get the deployments list as the cluster-admin:
kubectl get deployments -n=sales -
Even as a cluster admin, you can run commands as a different user without actually switching the context. Run the following command to get the deployments list as Joe:
kubectl get deployments -n=sales --as joe
Notice it’s forbidden to access the resources as joe.
The alternative to the above command is:
kubectl get deployments --context=joe-context -
If you don’t want to pass --as <user> with each command, you can switch the context to another user. Run the following command to use the context created for Joe:
kubectl config use-context joe-contextNotice it shows the message:
Switched to context "joe-context". -
Verify the current context is set to joe-context:
kubectl config get-contexts
You should see:
-
Try viewing existing deployments as Joe:
kubectl get deployments
Notice it shows an error: the server doesn’t have a resource type "deployments"
The error means the current context’s user doesn’t have access to the deployments resource type.
Also notice that you didn’t have to specify --namespace=sales since the context is already configured to use the sales namespace by default.
-
Switch back to the default, minikube, context so you can run commands as the cluster-admin:
kubectl config use-context minikube
Ensure the minikube context is set.
In the next few steps, you will use an alternative technique to check if a user/context has access to the sales namespace.
-
Check if the cluster-admin has permission to access the sales namespace:
kubectl auth can-i list pods --namespace sales
Ensure it says yes
-
Get the pods list as the cluster-admin.
kubectl get pods -n=sales
You should be able to see the nginx deployment-based pod.
-
Check if joe has permission to access the sales namespace:
kubectl auth can-i list pods --namespace sales --as joe
Notice it says no
You have not yet authorized the users to access the sales namespace. You will do that in the next part.
8.7. Authorize the User to Access the Namespace
In this part, you will create a Role and a RoleBinding to authorize Joe to access the sales namespace.
-
View the contents of reader-role.yaml file:
cat reader-role.yaml
The configuration file creates a custom sales-reader role in the sales namespace with read-only permission to access pods, services, and deployments. The contents of the file are listed here for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: sales
  name: sales-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "watch", "list"]
-
Run the YAML configuration file to create a Role:
kubectl apply -f reader-role.yaml -
Run the following command to verify a custom sales-reader role has been created in the sales namespace:
kubectl get roles --namespace=sales
Notice it shows the following output:
NAME           CREATED AT
sales-reader   <date_and_time_stamp> -
View the contents of the reader-role-binding.yaml file:
cat reader-role-binding.yaml
The configuration file creates a custom role binding in the sales namespace and binds the user Joe to the custom sales-reader role.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sales-read-access
  namespace: sales
subjects:
- kind: User
  name: joe
  apiGroup: ""
roleRef:
  kind: Role
  name: sales-reader
  apiGroup: "" -
Run the YAML configuration file to create a role binding:
kubectl apply -f reader-role-binding.yaml
Using the YAML configuration technique is the best practice. But, if you want to do the same thing without using the YAML configuration file, you can also do it imperatively using the kubectl CLI.
-
Run the following command to verify a custom role binding has been created in the sales namespace:
kubectl get rolebindings --namespace=sales
Notice it shows the following output:
NAME                ROLE                AGE
sales-read-access   Role/sales-reader   <time> -
Verify you cannot access the sales namespace as Alice:
kubectl get pods --namespace sales --as alice -
Verify you can access the sales namespace as Joe:
kubectl get pods --namespace sales --as joe -
Switch to Joe’s context so we can verify it’s a read-only role that has been assigned to Joe:
kubectl config use-context joe-context -
Try to perform another deployment and verify you cannot do it as Joe.
kubectl create deployment nginx2 --image=nginx
Notice it shows the following error message:
Error from server (Forbidden): deployments.apps is forbidden: User "joe" cannot create resource "deployments" in API group "apps" in the namespace "sales"
-
Let’s promote Joe’s role to a deployment manager so he can perform deployments. Before you can do that, you need to switch to the cluster admin context:
kubectl config use-context minikube -
View deployment-manager-role.yaml:
cat deployment-manager-role.yaml
The configuration file creates a custom deployment-manager role in the sales namespace with permissions to manage deployments. The contents of the file are listed here for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: sales
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
-
Apply the YAML configuration to create a custom deployment manager role:
kubectl apply -f deployment-manager-role.yaml -
View deployment-manager-rolebinding.yaml:
cat deployment-manager-rolebinding.yaml
The configuration file creates a custom role binding in the sales namespace and binds the user Joe to the custom deployment-manager role.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager-binding
  namespace: sales
subjects:
- kind: User
  name: joe
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: "" -
Apply the YAML configuration to create a custom deployment manager role binding:
kubectl apply -f deployment-manager-rolebinding.yaml -
Switch back to Joe’s context and see if we can perform a deployment:
kubectl config use-context joe-context -
Perform a deployment:
kubectl create deployment nginx2 --image=nginx
Ensure you use nginx2 since you already have a deployment named nginx.
-
Delete the deployments as Joe:
kubectl delete deployment nginx
kubectl delete deployment nginx2 -
Switch back to the main cluster-admin context:
kubectl config use-context minikube
8.8. Assign ClusterRole
In the previous part, you authorized Joe to access the sales namespace. Likewise, you can authorize users to access additional namespaces. If you want a user to have access to cluster-level resources, such as nodes, you have to use a ClusterRole. In this part, you will assign a ClusterRole to a user so that cluster-level resources can be viewed.
-
Try accessing nodes as a cluster-admin:
kubectl get nodes
It should show a node like this:
-
Try accessing the nodes as Joe:
kubectl get nodes --as joe
You will get: Error from server (Forbidden): nodes is forbidden: User "joe" cannot list resource "nodes" in API group "" at the cluster scope
-
View the cluster-role.yaml file:
cat cluster-role.yaml
The configuration file creates a custom cluster-node-reader role with read-only permission to the cluster’s nodes.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: cluster-node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
-
Apply the YAML configuration file to create the cluster role:
kubectl apply -f cluster-role.yaml -
Verify a cluster role has been created:
kubectl get clusterroles cluster-node-reader -
View cluster-role-binding.yaml:
cat cluster-role-binding.yaml
The configuration file creates a custom cluster role binding named read-cluster-nodes and binds the user Joe to the custom cluster-node-reader role.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-cluster-nodes
subjects:
- kind: User
  name: joe
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-node-reader
  apiGroup: rbac.authorization.k8s.io -
Apply the YAML configuration file to create a cluster role binding, i.e. authorize Joe to perform operations on nodes:
kubectl apply -f cluster-role-binding.yaml -
Verify the cluster role binding has been created:
kubectl get clusterrolebindings read-cluster-nodes -
Access the nodes list as Joe and ensure you can view the node information:
kubectl get nodes --as joe -
Access the nodes list as Alice and ensure you cannot view the node information:
kubectl get nodes --as alice
8.9. Clean-Up
In this part, you will delete all custom roles, rolebindings, and users that you created in the previous parts of this lab.
-
Switch to the main cluster admin context:
kubectl config use-context minikube -
Delete all roles and rolebindings by running the following command:
kubectl delete -f .
Don’t forget the dot at the end.
-
Delete all custom contexts:
kubectl config unset contexts.alice-context
kubectl config unset contexts.joe-context -
Delete all custom users:
kubectl config unset users.alice
kubectl config unset users.joe -
Type exit to close the Terminal.
8.10. Review
In this lab, you explored how to implement RBAC security in a Kubernetes cluster.
9. Accessing Applications
Kubernetes is designed to easily run and scale multi-part applications. For that to happen it needs to allow the different parts of an application to communicate. In this lab we’ll explore how the underlying Kubernetes network makes this possible.
We will start by deploying two applications - to keep things simple we will use a basic nginx web server image for these apps. Each of these runs on a linux image that we can shell into to show the access we have to our apps from within the kubernetes cluster. Later we will expose the apps so they can be accessed from outside the cluster.
Make sure you run the following command:
docker login -u {your-docker-id} -p {your-access-token}
9.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps -
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/ -
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running then start it using this command:
minikube start
9.2. Create Deployment NX1
You will deploy the nginx application and give it the name 'nx1'.
We will refer to the current terminal as nx1
-
Run the following command to create the nx1 deployment:
kubectl create deployment nx1 --image nginx:latest -
Verify that the deployment has been added:
kubectl get deployments -owide
You should see:
-
Check that a pod was created for the deployment and is running:
kubectl get pods -owide
You should see:
-
Open a shell into the Pod. Make sure to use the name listed in the pod command from the previous step - yours will differ from the one you see here:
kubectl exec -it nx1-5b7c656d97-gz4g6 -- /bin/bash
This will open up a command prompt (again, your prompt name will differ):
root@nx1-5b7c656d97-gz4g6:/# -
Execute the following commands at the prompt to install the 'nano' editor:
apt-get update
apt-get install nano
In the next few commands we will modify nginx’s index.html file and customize it for this instance of the application.
-
Execute the following command. It will open index.html up in the nano editor:
nano /usr/share/nginx/html/index.html -
Use Ctrl-K repeatedly to delete all the current contents of the file.
-
Type the following into the file:
Welcome from NX1! -
Enter these commands in this order to save the file:
Ctrl-x y [hit Enter] -
Check out the file and verify that the changes were saved:
cat /usr/share/nginx/html/index.html
It should return:
Welcome from NX1! -
Now we will try calling the nginx server using the curl command:
curl http://localhost/index.html
The nginx web server should return:
Welcome from NX1! -
Leave this terminal window open, as is.
9.3. Create Deployment NX2
Deploy the nginx application and give it the name 'nx2'.
-
Open a new Terminal window.
-
We will refer to this terminal as NX2
-
FYI: The rest of the following instructions are similar to those in the previous section except for the name 'nx2'. Also, all these instructions should be entered into the 'NX2' terminal except where otherwise noted.
-
Run the following command to create the nx2 deployment:
kubectl create deployment nx2 --image nginx:latest -
Verify that the deployment has been added:
kubectl get deployments -owide
You should see:
-
Check that a pod was created for the deployment and is running:
kubectl get pods -owide
You should see:
-
Open a shell into the Pod. It’s the one with the nx2 prefix. Make sure to use the name listed in the pod command from the previous step - yours will differ from the one you see here:
kubectl exec -it nx2-5664f9b5fb-mclqr -- /bin/bash
This will open up a command prompt (again, yours will differ):
root@nx2-5664f9b5fb-mclqr:/# -
Execute the following commands at the prompt to install the 'nano' editor:
apt-get update
apt-get install nano [Press Y when prompted to continue]
In the next few commands we will modify nginx’s index.html file and customize it for this instance of the application.
-
Execute the following command. It will open index.html up in the nano editor:
nano /usr/share/nginx/html/index.html -
Use Ctrl-K repeatedly to delete all the current contents of the file.
-
Type the following into the file:
Welcome from NX2! -
Enter these commands in this order to save the file:
Ctrl-x y [Press Enter] -
Check out the file and verify that the changes were saved:
cat /usr/share/nginx/html/index.html
It should return:
Welcome from NX2! -
Now we will try calling the nginx server using the curl command:
curl http://localhost/index.html
The nginx web server should return:
Welcome from NX2!
9.4. Check Current Network Situation
In this part we will check to see what sort of visibility we have between the two deployments. We will check to see what was created for each deployment as well.
-
Open a third Terminal window.
-
We will refer to this terminal as kubectl
-
Run this command to get information about the node that Kubernetes is running on:
kubectl describe nodes -
Scroll down until you come to this line "Non-terminated Pods: …". You should see pods in the default namespace for the two deployments, nx1 & nx2. This is evidence that the two pods are running on the same node.
Non-terminated Pods: (13 in total)
  Namespace    Name                      CPU Requests  CPU Limits
  ---------    ----                      ------------  ----------
  default      nx1-5b7c656d97-gz4g6      0 (0%)        0 (0%)
  default      nx2-5664f9b5fb-mclqr      0 (0%)        0 (0%)
  kube-system  coredns-5c98db65d4-5s5zq  100m (5%)     0 (0%)
-
Run this command to describe the nx1 pod, remember your pod names will be different:
kubectl describe pods nx1-5b7c656d97-gz4g6
Below are some of the more important lines and sections when it comes to networking. Try to locate them in your own output:
The IP address listed here is the address where the pod can be found inside the cluster. We can also find that information with the 'kubectl get pods -owide' command.
Take note of the IP address. You will use this in a later step. Yours will be different from the screenshot here.
-
Call the following command and check the IP address for nx1:
kubectl get pods -owide
This returns:
How can we verify that the IP we see here points to nx1? Well, it should point to the container where we are running our first instance of nginx, so we can try using it to make a call to nginx.
-
Switch to the Terminal named nx1 (you can always identify the terminal by looking at the prompt; the prompt in nx1 starts with "root@nx1-……").
-
Remember the 'nx1' terminal is running a shell inside the cluster. We can try calling our web server from there (replace the IP with the IP in your VM):
curl http://10.244.0.47
It should return:
Welcome to NX1!
Here we are making the call from the nx1 pod and accessing the nginx web server application in the same pod.
-
Switch to the Terminal named nx2.
-
Try calling our web server again (replace the IP with the IP in your VM):
curl http://10.244.0.47
This should also return:
Welcome to NX1!
In this case we are making the call from the nx2 pod and it is successfully communicating with the nginx application in the nx1 pod! This is evidence that pods have their own IP addresses within the cluster and that the pods can see each other. By the way, we could do the same with nx2: if you call it by its IP address, it should return the index.html page we customized for nx2.
Imagine you had a RESTful web service running on one pod and a related data store in another pod. The web service would see the pod with the data store and would be able to pull data from it as needed.
It’s great that applications in the various pods can see each other but what if we want to access the application from outside the cluster? Let’s give it a try.
-
Switch to the terminal named kubectl. (This is the third terminal. Since you didn’t shell into either of the pods, the prompt here should start with "wasadmin@…")
-
The shell here is NOT inside the cluster. It is outside. How do we know? Try running this command:
kubectl cluster-info
It brings back information about the cluster itself. This would not have worked from inside the cluster. The kubectl command always runs from outside the cluster.
9.5. Exposing Pods Outside the Cluster
In this section we will add services to expose our nginx pods to the network outside the cluster.
-
Switch to the terminal named as kubectl.
-
Change directory to Works:
cd ~/Works
-
Create and edit a file named nx1-service.yaml using the following command:
code nx1-service.yaml
This will bring up the VS Code editor.
-
Enter the following text into the editor:
apiVersion: v1
kind: Service
metadata:
  name: nx1
spec:
  selector:
    app: nx1
  ports:
  - port: 80
    protocol: TCP
  type: NodePort
  externalTrafficPolicy: Cluster
-
Save the file and close the editor.
-
Run the following command. It will create a Kubernetes service for nx1:
kubectl apply -f nx1-service.yaml
The response should be:
service/nx1 created
-
Check that the service exists with this command:
kubectl get services nx1
This should return (your cluster-ip value might be different):
Take note of the port. In the case above, it is 32275. Yours will be different.
-
Get the cluster ip:
minikube ip
It returns (your minikube ip might be different):
-
Make a call to the port you noted using your minikube ip: (your port number may be different. Refer to the output above to check the port number)
curl http://<Minikube IP>:<NodePort>
This should return:
This is evidence of accessing nx1 from outside the cluster!
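Although the lab only exposes nx1, the nx2 deployment could be exposed the same way. A matching service spec is sketched below; the file name nx2-service.yaml and the spec itself are illustrative, not part of the lab steps (if you create it, delete it afterwards with 'kubectl delete service nx2'):

```yaml
# nx2-service.yaml (illustrative sketch, mirrors nx1-service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: nx2
spec:
  selector:
    app: nx2          # matches the label on the nx2 pods
  ports:
  - port: 80
    protocol: TCP
  type: NodePort
  externalTrafficPolicy: Cluster
```

Each NodePort service gets its own port on the minikube IP, so nx1 and nx2 would be reachable from outside the cluster on separate ports.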
9.6. Cleanup
-
Close the nx1 and nx2 terminal windows by using the command exit as many times as needed.
-
Delete the nx1 service from the kubectl terminal:
kubectl delete service nx1
-
Delete both deployments from the kubectl terminal:
kubectl delete deployment nx1
kubectl delete deployment nx2
-
Use exit to close the kubectl terminal
9.7. Review
In this lab we deployed two applications. The deployment process created pods within which ran containers where instances of the nginx web server were running. We gained shell access to the containers and were able to call the servers from within the cluster network.
The nginx servers were not accessible from outside the cluster network until we created Kubernetes services. Then we were able to access each nginx server on a separate port at the cluster IP address.
10. Troubleshooting
When deploying objects to Kubernetes things can go wrong in various ways. In this lab we will take a look at some errors and how to fix them. We will troubleshoot various issues that come up when deploying an application including:
-
Yaml Syntax Exceptions
-
nodeSelector Issues
-
Failed Image Pull
-
Resource Issues while Scaling
For each issue we will see how the issue appears, look into errors and warnings, fix the issue, and re-test to verify the fix.
Make sure you run the following command:
docker login -u \{your-docker-id} -p \{your-access-token}
10.1. Setting the Stage
-
Open a new Terminal window.
-
Ensure you can access the Docker CLI by running the following:
docker ps
-
In the Terminal, navigate to the Works directory you created earlier:
cd ~/Works/
-
Check minikube status:
minikube status
You should see that minikube is Running:
If minikube is not running then start it using this command:
minikube start
10.2. Deploy an Application and Fix Yaml Syntax
In this section we will work to deploy an application based on an nginx web server.
-
Create a deployment specification using VSCode:
code webapp-deployment.yaml
-
Enter the following content into the file. Enter the text EXACTLY as it is shown here. Some lines contain errors on purpose so that we can demonstrate those errors and how to troubleshoot them:
# app-deployment.yaml
# v0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
  app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: mycluster
      containers:
      - name: webapp
        image: ngix:latest
        ports:
        - containerPort: 80
        env:
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
-
Save and close the file.
-
Apply the deployment yaml file:
kubectl apply -f webapp-deployment.yaml
An error message will be returned:
Error from server (BadRequest): error when creating "webapp-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "metadata.app"
One part of the message is most important to us: 'unknown field "metadata.app"'. This tells us that there is a problem in the yaml file related to the text "app" in the metadata section of the file.
-
Open up webapp-deployment.yaml in a text editor:
code webapp-deployment.yaml
-
Take a look at the metadata section and locate the text 'app'. The section should look like this:
metadata:
  name: webapp-deployment
  labels:
  app: nginx
The problem is that 'app: nginx' is a label and, based on YAML syntax, it should be indented under the 'labels:' tag above it. Add an indentation (two spaces should do it) like this:
metadata:
  name: webapp-deployment
  labels:
    app: nginx
-
Make the change shown above then save the yaml file and exit the editor.
-
Re-run the failed apply statement:
kubectl apply -f webapp-deployment.yaml
You should now get the output:
deployment.apps/webapp-deployment created
Our first problem, a YAML syntax issue, has been fixed.
10.3. Troubleshooting a nodeSelector Issue
We completed our last section by deploying the webapp application. It is now time to check and make sure the deployment completed as expected. Some of the automated steps involved include:
-
Deploying Pods
-
Pulling images
-
Creating containers
If something goes wrong with any of these things the application will be unusable.
-
Check on deployments:
kubectl get deployments
You should see the following output:
The READY column shows that zero of one pods were deployed.
-
Let’s verify that by checking out the pods:
kubectl get pods
You should see the following output (your pod name will be different):
The READY column is indeed the same here and the STATUS is 'Pending'.
-
Drill down and get more details by describing the pod listed above (use the pod name from your machine):
kubectl describe pods webapp-deployment-5956985478-76b94
-
Scroll down through the output until you see the 'Conditions:' section:
Conditions:
  Type           Status
  PodScheduled   False
You can see here that PodScheduled is False. This means that the scheduler could not identify a node to run the pod on.
-
Scroll down some more to find the 'Events:' section.
Here you see a warning with the message: '1 node(s) didn’t match node selector'. This must mean that a node selector was set in the yaml file but the conditions it requested were not met by any active nodes.
-
Open the yaml in the editor again:
code webapp-deployment.yaml
-
Search for 'nodeSelector'. You should find the following section:
nodeSelector:
  kubernetes.io/hostname: mycluster
The node selector holds a single name-value pair that is supposed to match a label on one of the running nodes. Our current cluster has a single node named 'minikube'. Let’s check the minikube node to see if it has this label.
-
Leave the editor open and switch back to the terminal window.
-
Describe the node:
kubectl describe nodes
-
Scroll down in the command output to find the "Labels:" section. It should look like this:
The fourth label looks similar to the one we saw in the nodeSelector, but there the value was 'mycluster' and here it is 'minikube':
kubernetes.io/hostname=minikube
-
Go back to the YAML file in the editor and set the node selector to match the label from the node:
nodeSelector:
  kubernetes.io/hostname: minikube
-
Save the yaml file and close it.
-
Apply the updated yaml deployment specification file:
kubectl apply -f webapp-deployment.yaml
Take note that we did not delete the existing deployment before applying the update. This can cause all kinds of problems, one of which we will see soon.
-
Check the pods:
kubectl get pods
Now there are two pods. The first one is 'Pending' and is still trying to be deployed to a node that matches the incorrect label in its specification. That pod was created from the previous flawed deployment. Trying to delete the old pod will be problematic because Kubernetes will keep trying to restart it even though it is broken.
The easiest way to fix this is to delete the initial deployment and recreate it.
-
Remove the deployment with this command:
kubectl delete -f webapp-deployment.yaml
-
Verify that the deployment and pods were removed using the following commands:
kubectl get deployments
kubectl get pods
You may need to wait until the pod is completely removed.
-
Apply the updated yaml again:
kubectl apply -f webapp-deployment.yaml
-
Check the deployment:
kubectl get deployments
Your output should look like this:
READY indicates that zero of one pods were deployed.
-
Let’s check the pods:
kubectl get pods
Although the STATUS is no longer 'Pending' and the issue with the node selector has been fixed, a new problem has come up, as shown by the new status: ImagePullBackOff. In the next section we will take a closer look and troubleshoot this new issue.
10.4. Troubleshooting a Failed Image Pull
In this section we will pick up where we left off in the last section and continue troubleshooting the ImagePullBackOff issue.
-
Start by listing the pods:
kubectl get pods -
Describe the pod shown in the previous command (use the pod name specific to you as it will be different from the one listed here):
kubectl describe pods webapp-deployment-5944b5866d-2r24d
-
Scroll down in the output to the 'Conditions:' section:
Here we see that the pod has been initialized and scheduled but is not yet ready.
-
For more information we will scroll down further and check out the 'Events:' section:
The information we need is listed in the third item:
Failed to pull image "ngix:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Here it says that the kubelet failed to pull the requested "ngix:latest" image. This looks like a typo where 'ngix' was entered instead of the correct value 'nginx'.
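As an aside, this kind of one-character typo can also be fixed from the command line with sed instead of an editor. The sketch below demonstrates the substitution on a scratch file (the /tmp path is an assumption, used so the lab files stay untouched; in the lab you would run the same substitution against webapp-deployment.yaml):

```shell
# Demonstrate the typo fix with sed on a scratch copy.
printf '        image: ngix:latest\n' > /tmp/typo-demo.yaml
sed -i 's/ngix:latest/nginx:latest/' /tmp/typo-demo.yaml
cat /tmp/typo-demo.yaml   # the line now reads: image: nginx:latest
```

The steps that follow make the same fix through the editor, which is easier to verify visually.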
-
First delete the deployment:
kubectl delete -f webapp-deployment.yaml
-
Open the deployment YAML file in your editor.
code webapp-deployment.yaml
-
Find the string 'ngix' and replace it with 'nginx'.
-
Save the file and close the editor.
-
Run the following command:
kubectl apply -f webapp-deployment.yaml -
Try running "kubectl get deployments" a few times until the READY field shows up as "1/1". This may take a minute or two. This means that the pod was finally deployed successfully.
-
Get the pod name:
kubectl get pods
-
Describe the pod (use your pod’s name):
kubectl describe pods webapp-deployment-7cb4745588-glpmt
Scroll down until you see the 'Conditions:' section:
The output shows that the pod has been scheduled, initialized and is ready as is the container.
The 'Events:' section tells the same story.
This is what a successful pod deployment looks like.
10.5. Troubleshooting Resource Issues when Scaling
Now that we have the deployment running, we will try scaling it up from 1 to 20 replicas.
-
Execute the following commands to scale the deployment:
kubectl scale --current-replicas=1 --replicas=20 deployment webapp-deployment
[Enter the command in 1 line]
You should see the output:
deployment.apps/webapp-deployment scaled
-
Scaling should affect the number of pods so we will check that:
kubectl get pods
You should see the output:
Notice that many of the pods are ready, but many others show READY 0/1 and STATUS 'Pending'. We need to find out why those pods are not running.
-
Pick a pod that is not running and use kubectl describe on it (your pod name will be different):
kubectl describe pods webapp-deployment-7cb4745588-zx9bn
-
Under the 'Conditions:' section you will find that the pod has not been scheduled:
Conditions:
  Type           Status
  PodScheduled   False
-
Under the 'Events:' section you will find this warning:
Warning  FailedScheduling  112s  default-scheduler  0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
-
This is a resource issue. We can verify the problem by checking the node to see how many resources it has available. Use the following command:
kubectl describe nodes
-
Find the 'Non-terminated Pods:' section in the output. You will see that each pod from our deployment is taking about 6% of the total CPU resource (125m):
  Namespace  Name        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------  ----        ------------  ----------  ---------------  -------------  ---
  default    webapp-...  125m (6%)     250m (12%)  64Mi (3%)        128Mi (6%)     16m
-
Now check the "Allocated resources" section:
Here we see that 98% of the CPU is already allocated (your numbers might be slightly different), which means that there is not enough left for more pods. We have three choices here:
-
Increase the CPU resource allocated to the nodes VM.
-
Get by with fewer replicas for the time being.
-
Reduce the amount of CPU requested by each pod.
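To see why a smaller replica count is a reasonable target, a back-of-the-envelope capacity check can be sketched in shell arithmetic. The numbers below are illustrative assumptions, not values measured from the lab VM - substitute the allocatable CPU and kube-system requests reported by your own 'kubectl describe nodes' output:

```shell
# Rough scheduling capacity check (illustrative numbers, not measured):
allocatable=1930   # assumed allocatable CPU on the node, in millicores
system=750         # assumed CPU already requested by kube-system pods
free=$(( allocatable - system ))
echo "pods that fit at a 125m request: $(( free / 125 ))"
echo "pods that fit at a 100m request: $(( free / 100 ))"
```

With these assumed numbers, about 9 pods fit at the current 125m request, and lowering the request to 100m would leave room for a few more.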
For now let’s rescale the deployment down from 20 to 9 replicas.
-
Execute the following scale command in 1 line:
kubectl scale --current-replicas=20 --replicas=9 deployment webapp-deployment
The output should say:
deployment.apps/webapp-deployment scaled
-
Check the pods again:
kubectl get pods
You will see that there are now only 9 pods and all of them are running.
If you wanted instead to reduce the amount of CPU used by each container you would modify the following section in the webapp-deployment.yaml:
resources:
  requests:
    memory: "64Mi"
    cpu: "125m"
  limits:
    memory: "128Mi"
    cpu: "250m"
Reducing the requested CPU from 125m to 100m should allow more pods to be scheduled.
resources:
  requests:
    memory: "64Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "250m"
After this change you would need to delete the current deployment and recreate it with the updated yaml. At that point you would be able to scale the deployment up to 10 replicas without running out of CPU.
-
Type exit until the Terminal is closed.
10.6. Review
In this lab we troubleshot issues that come up when deploying an application including:
-
Yaml Syntax Exceptions
-
nodeSelector Issues
-
Failed Image Pull
-
Resource Issues while Scaling
For each issue we saw how the issue appeared, looked into errors and warnings, fixed the issue, and re-tested to verify the fix.
© 2025 Web Age Solutions, LLC
Revision 4.0.1 published on 2025-01-07.
All rights reserved. No part of this book may be reproduced or used in any form or by any electronic, mechanical, or other means, currently available or developed in the future, including photocopying, recording, digital scanning, or sharing, or in any information storage or retrieval system, without permission in writing from the publisher.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
To obtain authorization for any such activities (e.g., reprint rights, translation rights), to customize this book, or for other sales inquiries, please contact:
Web Age Solutions, LLC
1 California Street Suite 2900
San Francisco, CA 94111
https://www.webagesolutions.com
USA: 1-877-517-6540, email: getinfousa@webagesolutions.com
Canada: 1-877-812-8887 toll free, email: getinfo@webagesolutions.com